Retrieval and Registration of Long-Range Overlapping Frames for Scalable Mosaicking of In Vivo Fetoscopy
Purpose: The standard clinical treatment of Twin-to-Twin Transfusion Syndrome
consists of the photocoagulation of undesired anastomoses located on the
placenta, which are responsible for blood transfer between the two twins. While
it is the standard-of-care procedure, fetoscopy suffers from a limited
field-of-view of the placenta, resulting in missed anastomoses. To facilitate
the task of the clinician, building a global map of the placenta providing a
larger overview of the vascular network is highly desired. Methods: To overcome
the challenging visual conditions inherent to in vivo sequences (low contrast,
obstructions or presence of artifacts, among others), we propose the following
contributions: (i) robust pairwise registration is achieved by aligning the
orientation of the image gradients, and (ii) difficulties regarding long-range
consistency (e.g. due to the presence of outliers) are tackled via a bag-of-words
strategy, which identifies overlapping frames of the sequence to be registered
regardless of their respective location in time. Results: In addition to visual
difficulties, in vivo sequences are characterised by the intrinsic absence of a
gold standard. We present mosaics that qualitatively motivate our
methodological choices and demonstrate their promise. We also demonstrate
semi-quantitatively, via visual inspection of registration results, the
efficacy of our registration approach in comparison to two standard baselines.
Conclusion: This paper proposes the first approach for the construction of
mosaics of placenta in in vivo fetoscopy sequences. Robustness to visual
challenges during registration and long-range temporal consistency are
proposed, offering first positive results on in vivo data for which standard
mosaicking techniques are not applicable.
Comment: Accepted for publication in the International Journal of Computer
Assisted Radiology and Surgery (IJCARS)
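The gradient-orientation alignment of contribution (i) can be sketched as a similarity measure between two frames. The helper below is a hypothetical illustration, not the authors' implementation: it scores a pair of images by the squared cosine between their gradient directions, which ignores gradient magnitude and sign and is therefore tolerant of the low contrast typical of in vivo frames.

```python
# Sketch of a gradient-orientation similarity between two frames
# (hypothetical implementation; the paper's exact formulation may differ).
import math

def gradients(img):
    """Central-difference gradients (gx, gy) for a 2D list of floats."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return gx, gy

def orientation_similarity(a, b, eps=1e-6):
    """Mean squared cosine between the gradient directions of a and b.
    Squaring makes the score insensitive to gradient sign (contrast
    inversions), which helps on low-contrast fetoscopic frames."""
    gax, gay = gradients(a)
    gbx, gby = gradients(b)
    total, count = 0.0, 0
    for y in range(len(a)):
        for x in range(len(a[0])):
            na = math.hypot(gax[y][x], gay[y][x])
            nb = math.hypot(gbx[y][x], gby[y][x])
            if na > eps and nb > eps:
                cos = (gax[y][x] * gbx[y][x] + gay[y][x] * gby[y][x]) / (na * nb)
                total += cos * cos
                count += 1
    return total / count if count else 0.0

# A frame compared with itself is perfectly aligned:
ramp = [[float(x + y) for x in range(6)] for y in range(6)]
print(orientation_similarity(ramp, ramp))  # → 1.0
```

In a registration loop, this score would be maximised over candidate transformations of one frame onto the other.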
Detector-Free Dense Feature Matching for Fetoscopic Mosaicking
Fetoscopic Laser Photocoagulation (FLP) is used to treat twin-to-twin transfusion syndrome; however, the procedure is hindered by the difficulty of visualizing the intraoperative surgical environment, owing to the limited surgical field-of-view, unusual placenta position, limited maneuverability of the fetoscope and poor visibility due to fluid turbidity and occlusions. Fetoscopic video mosaicking can create an expanded field-of-view (FOV) image of the fetoscopic intraoperative environment, which may support the surgeons in localizing the vascular anastomoses during the FLP procedure. However, existing classical video mosaicking methods tend to perform poorly on in vivo fetoscopic videos. We propose the use of a transformer-based, detector-free local feature matching method as a dense feature matching technique for creating reliable mosaics with minimal drifting error. Using the publicly available fetoscopy placenta dataset, we experimentally show the robustness of the proposed method over the state-of-the-art vessel-based fetoscopic mosaicking method.
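Whatever matcher produces the pairwise alignments, mosaicking then chains them into one reference frame; drift is the accumulation of small pairwise errors along this chain. A minimal sketch of the chaining step (pure-Python 3x3 matrices as a stand-in for a numpy/OpenCV pipeline):

```python
# Once a dense matcher (e.g. a LoFTR-style transformer) yields pairwise
# homographies H_i mapping frame i+1 into frame i, the mosaic places every
# frame in the reference frame of frame 0 by chaining the transforms.

def matmul3(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain_to_reference(pairwise):
    """Global homographies G_i with G_0 = I and G_i = G_{i-1} @ H_{i-1}."""
    identity = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    globals_ = [identity]
    for h in pairwise:
        globals_.append(matmul3(globals_[-1], h))
    return globals_

# Two pure translations of (2, 0) and (0, 3) compose to (2, 3):
t1 = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]
t2 = [[1, 0, 0], [0, 1, 3], [0, 0, 1]]
g = chain_to_reference([t1, t2])
print(g[2][0][2], g[2][1][2])  # → 2.0 3.0
```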
Automatic C-Plane Detection in Pelvic Floor Transperineal Volumetric Ultrasound
© 2020, Springer Nature Switzerland AG. Transperineal volumetric ultrasound (US) imaging has become routine practice for diagnosing pelvic floor disease (PFD). Clinical guidelines stipulate that measurements be made in an anatomically defined 2D plane within a 3D volume, the so-called C-plane. This task is currently performed manually in clinical practice, which is labour-intensive and requires expert knowledge of pelvic floor anatomy, as no computer-aided C-plane method exists. To automate this process, we propose a novel, guideline-driven approach for automatic detection of the C-plane. The method uses a convolutional neural network (CNN) to identify the extreme coordinates of the symphysis pubis and levator ani muscle (which define the C-plane) directly via landmark regression; the C-plane itself is identified in a post-processing step. When evaluated on 100 US volumes, our best-performing method (multi-task regression with UNet) achieved mean errors of 6.05 mm and 4.81 and took 20 s. Two experts blindly evaluated the clinical diagnostic quality of the automatically detected planes against the manually defined (gold standard) C-plane. We show that the proposed method performs comparably to the manual definition. The automatic method reduces the average time to detect the C-plane by 100 s and reduces the need for high-level expertise in PFD US assessment.
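The post-processing step turns regressed landmark coordinates into a plane. A minimal sketch of that geometry, assuming three non-collinear landmarks in millimetres (a hypothetical helper; the paper's own post-processing may differ):

```python
# Recover the plane through three detected landmarks as a unit normal n
# and offset d, so that the 2D C-plane (n . x = d) can be resampled from
# the 3D ultrasound volume.
import math

def plane_from_landmarks(p0, p1, p2):
    """Plane through three 3D points: returns (unit normal, offset)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # Normal = u x v (cross product).
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]
    d = sum(n[i] * p0[i] for i in range(3))  # plane equation: n . x = d
    return n, d

# Landmarks in the z = 0 slice give the z-axis as normal:
n, d = plane_from_landmarks([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(n, d)  # → [0.0, 0.0, 1.0] 0.0
```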
DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces
Deep learning-based automatic segmentation methods have become
state-of-the-art. However, they are often not robust enough for direct clinical
application, as domain shifts between training and testing data affect their
performance. Failure in automatic segmentation can cause sub-optimal results
that require correction. To address these problems, we propose a novel 3D
extension of an interactive segmentation framework that represents a
segmentation from a convolutional neural network (CNN) as a B-spline explicit
active surface (BEAS). BEAS ensures segmentations are smooth in 3D space,
increasing anatomical plausibility, while allowing the user to precisely edit
the 3D surface. We apply this framework to the task of 3D segmentation of the
anal sphincter complex (AS) from transperineal ultrasound (TPUS) images, and
compare it to the clinical tool used in the pelvic floor disorder clinic (4D
View VOCAL, GE Healthcare; Zipf, Austria). Experimental results show that: 1)
the proposed framework gives the user explicit control of the surface contour;
2) the perceived workload calculated via the NASA-TLX index was reduced by 30%
compared to VOCAL; and 3) it required 70% (170 seconds) less user time than
VOCAL (p < 0.00001).
Comment: 4 pages, 3 figures, 1 table, conference
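The smoothness of a BEAS representation comes from writing the surface as an explicit B-spline expansion of its coefficients. The 1D analogue below is an illustrative sketch (not the authors' code): a single-valued function built from cubic B-spline bases at integer knots, the same construction BEAS applies along each surface parameter.

```python
def bspline3(x):
    """Cubic B-spline kernel (the basis used along each BEAS parameter)."""
    ax = abs(x)
    if ax < 1:
        return 2.0 / 3.0 - ax * ax + ax ** 3 / 2.0
    if ax < 2:
        return (2.0 - ax) ** 3 / 6.0
    return 0.0

def explicit_value(coeffs, t):
    """Explicit (single-valued) B-spline function with knots at the
    integers: f(t) = sum_k c_k * beta3(t - k). Editing one coefficient
    moves the surface locally while keeping it smooth."""
    return sum(c * bspline3(t - k) for k, c in enumerate(coeffs))

# Partition of unity: equal coefficients give a flat profile, i.e. the
# smoothness constraint does not distort a constant shape.
print(round(explicit_value([5.0] * 8, 3.5), 6))  # → 5.0
```

Interactive editing then amounts to adjusting a handful of coefficients near the user's click rather than individual voxels.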
Placental vessel-guided hybrid framework for fetoscopic mosaicking
Fetoscopic laser photocoagulation is used to treat twin-to-twin transfusion syndrome; however, the procedure is hindered by the difficulty of visualising the intraoperative surgical environment, owing to the limited surgical field-of-view, unusual placenta position, limited manoeuvrability of the fetoscope and poor visibility due to fluid turbidity and occlusions. Fetoscopic video mosaicking can create an expanded field-of-view image of the fetoscopic intraoperative environment, which could support the surgeons in localising the vascular anastomoses during the fetoscopic procedure. However, classical handcrafted feature matching methods fail on in vivo fetoscopic videos. An existing state-of-the-art method for fetoscopic mosaicking relies on vessel presence and fails when vessels are not present in the view. We propose a vessel-guided hybrid fetoscopic mosaicking framework that mutually benefits from a placental vessel-based registration and a deep learning-based dense matching method to optimise the overall performance. A selection mechanism is implemented, based on vessels' appearance consistency and photometric error minimisation, for choosing the best pairwise transformation. Using the extended fetoscopy placenta dataset, we experimentally show the robustness of the proposed framework over the state-of-the-art methods, even in vessel-free, low-textured or low-illumination non-planar fetoscopic views.
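The selection mechanism can be sketched as follows: each registration branch proposes a transform, and the framework keeps whichever yields the lower photometric error on the overlap. The sketch below simplifies the transforms to integer translations over 2D lists, a hypothetical stand-in for the full homography warping.

```python
def photometric_error(a, b, dx, dy):
    """Mean squared intensity difference of b shifted by (dx, dy) onto a."""
    h, w = len(a), len(a[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:   # only score the overlap
                diff = a[y][x] - b[sy][sx]
                total += diff * diff
                count += 1
    return total / count if count else float("inf")

def select_transform(a, b, candidates):
    """Keep the candidate (dx, dy) minimising the photometric error,
    e.g. one proposal from the vessel branch and one from dense matching."""
    return min(candidates, key=lambda t: photometric_error(a, b, *t))

# b is a copy of a shifted left by one column; the correct candidate wins:
a = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
b = [[1, 2, 3, 9], [5, 6, 7, 9], [9, 10, 11, 9]]
print(select_transform(a, b, [(0, 0), (1, 0)]))  # → (1, 0)
```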
Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks
Despite the state-of-the-art performance for medical image segmentation, deep
convolutional neural networks (CNNs) have rarely provided uncertainty
estimations regarding their segmentation outputs, e.g., model (epistemic) and
image-based (aleatoric) uncertainties. In this work, we analyze these different
types of uncertainties for CNN-based 2D and 3D medical image segmentation
tasks. We additionally propose a test-time augmentation-based aleatoric
uncertainty to analyze the effect of different transformations of the input
image on the segmentation output. Test-time augmentation has been previously
used to improve segmentation accuracy, yet not been formulated in a consistent
mathematical framework. Hence, we also propose a theoretical formulation of
test-time augmentation, where a distribution of the prediction is estimated by
Monte Carlo simulation with prior distributions of parameters in an image
acquisition model that involves image transformations and noise. We compare and
combine our proposed aleatoric uncertainty with model uncertainty. Experiments
with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic
Resonance Images (MRI) showed that 1) the test-time augmentation-based
aleatoric uncertainty provides a better uncertainty estimation than calculating
the test-time dropout-based model uncertainty alone and helps to reduce
overconfident incorrect predictions, and 2) our test-time augmentation
outperforms a single-prediction baseline and dropout-based multiple
predictions.
Comment: 13 pages, 8 figures, accepted by Neurocomputing
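The test-time augmentation scheme can be sketched end to end: sample a transform from a prior (here, a flip plus additive intensity noise), run the model on the transformed input, map the prediction back through the inverse transform, and take per-pixel statistics across the Monte Carlo samples. `model` below is a hypothetical stand-in for a trained segmentation CNN.

```python
import random
import statistics

def model(x):
    # Dummy "segmentation": threshold at 0.5 (placeholder for a CNN).
    return [1.0 if v > 0.5 else 0.0 for v in x]

def tta_predict(x, n_samples=32, noise_std=0.05, seed=0):
    """Monte Carlo TTA: returns per-pixel mean prediction and variance.
    The variance is the aleatoric (input-induced) uncertainty."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        flip = rng.random() < 0.5              # spatial transform prior
        aug = list(reversed(x)) if flip else list(x)
        aug = [v + rng.gauss(0.0, noise_std) for v in aug]  # noise prior
        p = model(aug)
        if flip:                               # invert the transform
            p = list(reversed(p))
        preds.append(p)
    mean = [statistics.mean(col) for col in zip(*preds)]
    var = [statistics.pvariance(col) for col in zip(*preds)]
    return mean, var

mean, var = tta_predict([0.1, 0.49, 0.9])
# The borderline pixel (0.49) carries the uncertainty; confident pixels do not:
print(var[0] == 0.0, var[2] == 0.0, var[1] > 0.0)  # → True True True
```

Flagging pixels with high variance is what lets the method suppress overconfident incorrect predictions.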
Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art
performance for automatic medical image segmentation. However, they have not
demonstrated sufficiently accurate and robust results for clinical use. In
addition, they are limited by the lack of image-specific adaptation and the
lack of generalizability to previously unseen object classes. To address these
problems, we propose a novel deep learning-based framework for interactive
segmentation by incorporating CNNs into a bounding box and scribble-based
segmentation pipeline. We propose image-specific fine-tuning to make a CNN
model adaptive to a specific test image, which can be either unsupervised
(without additional user interactions) or supervised (with additional
scribbles). We also propose a weighted loss function considering network and
interaction-based uncertainty for the fine-tuning. We applied this framework to
two applications: 2D segmentation of multiple organs from fetal MR slices,
where only two types of these organs were annotated for training; and 3D
segmentation of brain tumor core (excluding edema) and whole brain tumor
(including edema) from different MR sequences, where only tumor cores in one MR
sequence were annotated for training. Experimental results show that 1) our
model is more robust to segment previously unseen objects than state-of-the-art
CNNs; 2) image-specific fine-tuning with the proposed weighted loss function
significantly improves segmentation accuracy; and 3) our method leads to
accurate results with fewer user interactions and less user time than
traditional interactive segmentation methods.
Comment: 11 pages, 11 figures
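The uncertainty-weighted loss for image-specific fine-tuning can be sketched as a weighted cross-entropy: user-scribbled pixels are trusted fully, while pixels whose current prediction is uncertain (probability near 0.5) are down-weighted so the network is not fine-tuned toward its own unreliable labels. This is an illustrative stand-in, not the paper's exact loss.

```python
import math

def weighted_bce(probs, targets, scribbled):
    """Weighted binary cross-entropy over a flat list of pixels.
    probs: current predicted foreground probabilities;
    targets: proposed labels; scribbled: True where the user annotated."""
    total, weight_sum = 0.0, 0.0
    for p, t, s in zip(probs, targets, scribbled):
        if s:
            w = 1.0                  # user interaction: trust fully
        else:
            w = abs(2.0 * p - 1.0)   # network confidence in [0, 1]
        p = min(max(p, 1e-7), 1 - 1e-7)   # clamp for numerical safety
        loss = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        total += w * loss
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# An uncertain pixel (p = 0.5) contributes nothing unless scribbled:
print(weighted_bce([0.5], [1.0], [False]))  # → 0.0
```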
Robust fetoscopic mosaicking from deep learned flow fields
PURPOSE: Fetoscopic laser photocoagulation is a minimally invasive procedure to treat twin-to-twin transfusion syndrome during pregnancy by stopping irregular blood flow in the placenta. Building an image mosaic of the placenta and its network of vessels could assist surgeons in navigating the challenging fetoscopic environment during the procedure. METHODOLOGY: We propose a fetoscopic mosaicking approach that combines deep learning-based optical flow with robust estimation for filtering out the inconsistent motions that occur due to floating particles and specularities. While the current state of the art for fetoscopic mosaicking relies on clearly visible vessels for registration, our approach overcomes this limitation by considering the motion of all consistent pixels within consecutive frames. We also overcome the challenges of applying off-the-shelf optical flow to fetoscopic mosaicking through the use of robust estimation and local refinement. RESULTS: We compare our proposed method against state-of-the-art vessel-based and optical flow-based image registration methods, as well as robust estimation alternatives. We also evaluate our pipeline with different optical flow and robust estimation alternatives. CONCLUSIONS: Our results show that our method outperforms both the vessel-based state of the art and the Lucas-Kanade (LK) baseline, notably when vessels are either poorly visible or too thin to be reliably identified. Our approach is thus able to build consistent placental vessel mosaics in challenging cases where currently available alternatives fail.
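The robust-estimation step can be sketched as follows: off-the-shelf optical flow gives a dense field, but floating particles and specularities inject inconsistent vectors. A robust estimator recovers the dominant frame motion; the component-wise median below is a deliberately simple stand-in for the RANSAC-style fitting a real pipeline would use.

```python
import statistics

def robust_translation(flow):
    """Dominant inter-frame translation from a list of (dx, dy) flow
    vectors, one per sampled pixel. The median ignores the minority of
    outlier vectors caused by particles and specular highlights."""
    return (statistics.median(v[0] for v in flow),
            statistics.median(v[1] for v in flow))

# Ten pixels move by (2, 1); one specularity reports a wild (40, -30):
flow = [(2, 1)] * 10 + [(40, -30)]
print(robust_translation(flow))  # → (2, 1)
```

A mean over the same field would be dragged toward the outlier, which is exactly the failure mode the robust step prevents.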
Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network
Segmentation of the levator hiatus in ultrasound allows the extraction of
biometrics that are important for pelvic floor disorder assessment. In this work, we
present a fully automatic method using a convolutional neural network (CNN) to
outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume.
In particular, our method uses a recently developed scaled exponential linear
unit (SELU) as a nonlinear self-normalising activation function, applied here
for the first time in medical imaging with a CNN. SELU has important
advantages such as being parameter-free and mini-batch independent, which may
help to overcome memory constraints during training. A dataset with 91 images
from 35 patients during Valsalva, contraction and rest, all labelled by three
operators, is used for training and evaluation in a leave-one-patient-out
cross-validation. Results show a median Dice similarity coefficient of 0.90
with an interquartile range of 0.08, with equivalent performance to the three
operators (with a Williams' index of 1.03), and outperforming a U-Net
architecture without the need for batch normalisation. We conclude that the
proposed fully automatic method achieved equivalent accuracy in segmenting the
pelvic floor levator hiatus compared to a previous semi-automatic approach.
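The SELU activation mentioned above is parameter-free in the sense that its scale and alpha are fixed constants, chosen so that activations self-normalise toward zero mean and unit variance, which is what removes the need for batch normalisation. A minimal sketch of its definition:

```python
import math

# Fixed SELU constants (from Klambauer et al.'s self-normalising networks).
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    """selu(x) = scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    if x > 0:
        return SELU_SCALE * x
    return SELU_SCALE * SELU_ALPHA * (math.exp(x) - 1.0)

print(selu(1.0))   # → 1.0507009873554805
print(selu(0.0))   # → 0.0 (continuous at the origin)
```

For large negative inputs the output saturates near -SELU_SCALE * SELU_ALPHA, which bounds the variance of activations and drives the self-normalising behaviour.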